The Method

AI as Your Intake Assistant

Three steps replace the conventional pattern of re-reading notes, reconstructing context, and manually authoring a requirements document from scratch.

01

Record & transcribe.

Hold requirements meetings in Teams with transcription enabled. Even dictating your own notes immediately after a stakeholder call works — the input doesn't have to be verbatim dialogue. The goal is capturing what was said before memory degrades, not producing a perfect transcript.

02

Feed the transcript to Claude.

Upload the meeting transcript (.vtt file) along with your requirements template. Let AI extract and structure the requirements. Claude reads the entire transcript — including the tangential comments that often contain the most important edge cases — and maps it against your template fields.
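If you want to trim a raw .vtt file before pasting it in (stripping cue numbers and timestamp lines shrinks the paste considerably), a minimal sketch — not a full WebVTT parser, and the function name is illustrative:

```python
import re

def vtt_to_text(vtt: str) -> str:
    """Strip the WEBVTT header, cue numbers, and timestamp lines,
    keeping only the spoken text. A rough sketch, not a full parser."""
    kept = []
    for line in vtt.splitlines():
        line = line.strip()
        if not line or line == "WEBVTT":
            continue
        if re.match(r"^\d+$", line):   # cue number
            continue
        if "-->" in line:              # timestamp line
            continue
        kept.append(line)
    return "\n".join(kept)
```

The output is plain speaker-labeled dialogue, which pastes cleanly into any Claude conversation alongside your template.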

03

Review & refine.

Claude produces a structured draft in under 30 seconds. Your job shifts from authoring to reviewing and refining. You're applying judgment to output, not spending cognitive energy reconstructing what was said three days ago.

Transcript intake prompt
"I have attached a meeting transcription (meeting.vtt) and a project requirements
template (requirements_template.docx). Please read the transcription and populate
the template with the information gathered, specifically focusing on functional
requirements, acceptance criteria, data context, UI descriptions, edge cases,
and compliance notes. Flag any ambiguities and list follow-up questions."

The same approach works for emails and voicemails. Forward the email or paste the voicemail transcript into Claude and ask it to extract requirements. No new meetings and no behavior change from stakeholders, just a smarter handoff. Even a stakeholder's one-sentence Slack message is valid input; it doesn't have to be a formal brief.

Tips & Tricks

If the transcript is messy, use a two-pass prompt: first extract actors, decisions, and open questions; then run a second pass that fills the requirements template from that extraction. Cleanup and authoring work better as separate steps.
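The two-pass split can be sketched as a small helper. Here `ask` stands in for whatever model call you use (a thin wrapper around your Claude API client, for example); the function name and prompt wording are illustrative, not prescribed by this method:

```python
def two_pass_intake(transcript: str, ask) -> str:
    """Run cleanup and authoring as separate model calls.
    `ask` is any callable that takes a prompt string and returns text."""
    # Pass 1 (cleanup): pull out the raw facts only.
    extraction = ask(
        "From this transcript, list the actors, decisions made, "
        "and open questions. Transcript:\n" + transcript
    )
    # Pass 2 (authoring): fill the template from the cleaned extraction.
    return ask(
        "Using these extracted facts, populate a requirements template "
        "with user stories, acceptance criteria, data context, edge cases, "
        "and compliance notes:\n" + extraction
    )
```

Keeping the passes separate also gives you a natural checkpoint: you can eyeball the pass-1 extraction before committing to a structured draft.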

Mental Model

The 5-Question Stakeholder Framework

A mental model for capturing the right information in constrained time. You don't need to formally ask these as interview questions — just make sure your notes touch all five areas.

Question 01
WHO is the user, and what is their role and context?
Not just a job title — a specific, situated person. What are they doing right now? What are the pressures on them? Generic personas lead to generic requirements. The more specific the user context, the more actionable the requirement.
"a Loan Officer managing 6 commercial loan applications with an overdue credit decision backlog ahead of a Thursday portfolio review call"
Question 02
WHAT do they need to accomplish?
The action, not the feature. Frame it in terms of what they're trying to do — not what they want the system to have. "I need to identify which sites are at enrollment risk" is different from "I need a dashboard." The task-first framing surfaces the right acceptance criteria naturally.
"identify which branches need immediate intervention before the 9 AM portfolio review call"
Question 03
WHY does this matter financially or operationally?
The outcome — what happens if they can or can't do this? This is where financial software diverges from consumer software. The stakes are audit findings, compliance exceptions, and delayed approvals. The answer to "why" is what gets requirements prioritized when the roadmap is contested.
"if a branch falls behind and we miss it, the credit exception triggers a Helix deviation record and a potential regulatory inquiry"
Question 04
WHAT does success look like?
Observable behavior — something you could verify in a 5-minute demo. Avoid "intuitive" and "easy to use." Good success criteria are specific enough that a developer could write a test for them without asking a follow-up question.
"I can see all 6 branches' application status ranked by risk on one screen within 3 seconds of login"
Question 05
WHAT can go wrong?
Edge cases, error states, compliance constraints. This is the question most requirements skip — and the one that generates the most rework. For financial software, always probe: data absence (a branch that hasn't started), data corruption (a branch with impossible values), and audit trail obligations (any manual override).
"what if a branch has no application data yet — does it show 0% or something different?"

If your notes miss one of the five areas, that's your follow-up email. Then let Claude structure the rest. A five-sentence email summary plus this framework gets you 80% of a well-structured requirement without a second meeting.
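The "touch all five areas" check is simple enough to automate against structured notes. A minimal sketch — the dict keys are hypothetical field names, not a prescribed schema:

```python
# The five areas of the stakeholder framework, in order.
FIVE_AREAS = ["who", "what", "why", "success", "risks"]

def missing_areas(notes: dict) -> list:
    """Return the framework areas the notes leave empty or absent.
    Each missing area is a candidate follow-up email question."""
    return [area for area in FIVE_AREAS
            if not notes.get(area, "").strip()]
```

Anything this returns becomes the body of your follow-up email, and nothing else needs a second meeting.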

Workflow Change

Structuring Early vs. Late

Where requirements get structured in the SDLC determines whether AI tools can help throughout the process — or only after the fact.

Current: Structure Late
Requirements structured at migration
  • Requirements don't get formally structured until migration into Helix ALM near the end of development.
  • By that point, code is written, design is done, and the requirement is largely retroactive documentation.
  • Ambiguities that should have been caught in week one surface as bugs or scope disputes in week eight.
  • AI tools never receive structured input — they can't help with generation, validation, or traceability because there's nothing to reason about yet.
  • Migration becomes a rewriting exercise: translating developer assumptions and Slack threads back into formal requirements language.
Future: Structure Early
Requirements structured at capture
  • Structure requirements right after the stakeholder interaction using AI as your authoring assistant.
  • Two things happen immediately: AI tools can consume requirements throughout the entire SDLC, and ambiguities surface before anyone writes a line of code.
  • Design uses requirements to validate prototypes. Development uses requirements to generate component scaffolding. QA uses requirements to write acceptance tests.
  • Helix migration becomes a formatting exercise — paste already-structured requirements into the correct fields rather than reconstructing intent from memory.
  • The discipline is front-loaded into a 10-minute AI-assisted authoring step, not distributed as rework across the entire team.

Helix ALM doesn't go away. Structuring early doesn't bypass your existing toolchain — it means requirements enter Helix earlier, cleaner, and more completely, because AI helped you structure them at the point of capture. Migration becomes a formatting exercise, not a rewriting exercise. The audit trail in Helix is exactly where it needs to be for SOC 2 Type II compliance — this workflow just ensures it contains real requirements, not post-hoc documentation.

Running Example

Applied to LOAN-2024-Q3

What this looks like in practice — from a raw 20-minute stakeholder call to a fully structured requirement ready for Helix.

Scenario

The product lead holds a 20-minute call with the lead Loan Officer (Sarah Chen) and the Acme Corp project coordinator for portfolio LOAN-2024-Q3. The feature in scope: the Loan Application Review Table. Teams transcription is on. The call is informal — no agenda, just a screen share of Sarah's current Excel workaround.

Raw transcript excerpt
Sarah Chen · Lead Loan Officer · LOAN-2024-Q3

"We really need to be able to see at a glance which branches are behind on approvals — not just numbers but whether they're actually at risk. Right now I'm building this in Excel every week before the portfolio review call and it takes me 90 minutes. Oh, and we need to be able to sort by how far behind they are. And it needs to be printable for the risk committee — they don't want PDFs, they want something they can mark up."

What Claude extracts
User story
As a Loan Officer managing LOAN-2024-Q3, I need to view application status across all branches ranked by risk level so that I can identify which branches need intervention before the weekly portfolio review call without manual spreadsheet work.

Acceptance criteria
Given I open the application review table, when the page loads, then all branches appear sorted by approval risk (furthest from target first).

Given I click Print, when the dialog opens, then a print-optimized layout renders with branch name, target, actual, % to target, and last updated columns visible and correctly formatted for letter-size paper.

Data context
Branch ID, Branch Name, Target Approvals, Actual Approvals, % to Target, Last Updated, Risk Status (calculated field: Behind / At Risk / On Track), Days Since Last Update

UI description
Sortable table with inline sort controls. Risk status column must use icon + text label — never color alone (WCAG 2.1 AA, color-blind-safe). Browser print functionality using @media print CSS. No pagination for ≤50 branches — full table on load.

Edge cases
Branch with 0 approved applications: display "No approval data" rather than 0% to distinguish a branch that hasn't started from one that is genuinely behind on a non-zero target.

Network timeout >10 seconds: show a retry prompt with the last-cached data timestamp rather than an empty state.

Compliance notes
Audit trail required for any manual status override. This view contains no PII — anonymized branch codes only. Any status override must be logged in Helix ALM with user ID, timestamp, and reason per SOC 2 Type II controls.

Follow-up questions
(1) What constitutes "At Risk" vs "Behind" — is there a defined threshold, e.g., <80% of target vs <60%?
(2) Should the risk committee print view include the risk status column, or is that internal-only?
(3) Is there a maximum number of branches per portfolio this table needs to support beyond 50?

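The Risk Status calculated field and the "no approval data" edge case can be sketched together. The thresholds below (<80% At Risk, <60% Behind) are the hypothetical ones from follow-up question 1, pending stakeholder confirmation:

```python
def risk_status(actual, target):
    """Classify a branch for the review table. Thresholds are placeholder
    values from the open follow-up question, not confirmed requirements."""
    if actual is None or not target:
        # Edge case from the extraction: a branch that hasn't started
        # shows "No approval data", not 0%.
        return "No approval data"
    pct = actual / target
    if pct < 0.60:
        return "Behind"
    if pct < 0.80:
        return "At Risk"
    return "On Track"
```

Note that a branch with 0 approvals against a non-zero target still classifies as Behind; only missing data gets the distinct label, exactly as the extracted edge case specifies.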
Ready to Use

Try This Monday

Two prompts you can use immediately. Adapt the bracketed placeholders to your feature. No template file required — these work in any Claude conversation.

Prompt 1 — Transcript intake
Use after a meeting, call, or email thread
Paste into Claude
Here are my notes from a stakeholder meeting about [feature name]:

[Paste your notes or transcript here]

Extract all requirements as user stories. For each requirement, provide:
- Story (As a / I need to / So that format with specific role and context)
- Acceptance criteria (Given/When/Then, covering happy path + 2-3 edge cases)
- Data context (inputs, outputs, types, validation rules)
- UI/UX description (layout, key interactions, states)
- Compliance notes (any SOC 2 Type II or PCI-DSS implications)

Flag anything ambiguous and list the follow-up questions I should ask.
Prompt 2 — Follow-up question generator
Use when a requirement feels incomplete
Paste into Claude
I have this partial requirement for [feature]:

[Paste your requirement]

Identify what's missing or ambiguous that would cause an AI code generation
tool to guess wrong. List specific follow-up questions I should ask the
stakeholder, ordered by impact on implementation.

The second prompt is often more valuable than the first. It's the difference between producing a requirement and stress-testing it. Before you hand a requirement to a developer or use it as a Cursor prompt, run it through the follow-up generator. The questions it surfaces are exactly the ones that cause rework if left unanswered.